Data Poisoning Attacks in Contextual Bandits
We study offline data poisoning attacks in contextual bandits, a class of
reinforcement learning problems with important applications in online
recommendation and adaptive medical treatment, among others. We provide a
general attack framework based on convex optimization and show that by slightly
manipulating rewards in the data, an attacker can force the bandit algorithm to
pull a target arm for a target contextual vector. The target arm and target
contextual vector are both chosen by the attacker. That is, the attacker can
hijack the behavior of a contextual bandit. We also investigate the feasibility
and the side effects of such attacks, and identify future directions for
defense. Experiments on both synthetic and real-world data demonstrate the
efficiency of the attack algorithm.
Comment: GameSec 2018
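The convex-optimization view admits a compact implementation: per-arm ridge-regression estimates are linear in the logged rewards, so requiring the target arm to win at the target context by a fixed margin yields linear constraints, and minimizing the perturbation norm gives a quadratic program. The sketch below (using cvxpy) is an illustrative reconstruction under these assumptions; the margin eps, the ridge learner, and all variable names are ours rather than the paper's exact formulation.

```python
# Illustrative reconstruction of a reward-poisoning attack on a linear
# contextual bandit whose learner fits per-arm ridge regression.
# NOT the paper's exact program: the margin eps and the learner are assumed.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d, K = 200, 5, 3                       # logged samples, context dim, arms
X = rng.normal(size=(n, d))               # logged contexts
arms = rng.integers(0, K, size=n)         # logged arm pulls
y = rng.normal(size=n)                    # logged (clean) rewards
lam, eps = 1.0, 0.1                       # ridge parameter, attack margin
x_star = rng.normal(size=d)               # attacker's target context
a_star = 0                                # attacker's target arm

delta = cp.Variable(n)                    # reward perturbation (the attack)
y_pois = y + delta

# Each arm's ridge estimate is affine in the poisoned rewards, so the
# margin constraints below are linear and the whole problem is convex.
preds = []
for a in range(K):
    idx = np.where(arms == a)[0]
    Xa = X[idx]
    A_inv = np.linalg.inv(Xa.T @ Xa + lam * np.eye(d))
    theta_a = A_inv @ Xa.T @ y_pois[idx]  # affine in delta
    preds.append(x_star @ theta_a)        # predicted reward at x_star

# Force the target arm to look best at x_star by margin eps while
# changing the logged rewards as little as possible.
constraints = [preds[a_star] >= preds[a] + eps
               for a in range(K) if a != a_star]
prob = cp.Problem(cp.Minimize(cp.sum_squares(delta)), constraints)
prob.solve()
print("attack cost ||delta||^2 =", prob.value)  # small cost = stealthy attack
```

A small optimal objective value corresponds to the "slight manipulation" in the abstract: the attack hijacks the learner's choice at the target context while barely changing the logged data.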
Task-agnostic Exploration in Reinforcement Learning
Efficient exploration is one of the main challenges in reinforcement learning
(RL). Most existing sample-efficient algorithms assume the existence of a
single reward function during exploration. In many practical scenarios,
however, there is not a single underlying reward function to guide the
exploration, for instance, when an agent needs to learn many skills
simultaneously, or multiple conflicting objectives need to be balanced. To
address these challenges, we propose the \textit{task-agnostic RL} framework:
In the exploration phase, the agent first collects trajectories by exploring
the MDP without the guidance of a reward function. After exploration, it aims
at finding near-optimal policies for $N$ tasks, given the collected
trajectories augmented with \textit{sampled rewards} for each task. We present
an efficient task-agnostic RL algorithm, \textsc{UCBZero}, that finds
$\epsilon$-optimal policies for $N$ arbitrary tasks after at most
$\tilde{O}(\log(N)H^5SA/\epsilon^2)$ exploration episodes. We also provide an
$\Omega(\log(N)H^2SA/\epsilon^2)$ lower bound, showing that the $\log$
dependency on $N$ is unavoidable. Furthermore, we provide an $N$-independent
sample complexity bound of \textsc{UCBZero} in the statistically easier setting
when the ground truth reward functions are known.
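As a concrete illustration of the two-phase protocol, the sketch below runs reward-free exploration on a small tabular MDP, estimates the transition kernel from the collected trajectories, and then plans for each task's reward function by value iteration on the empirical model. It is a minimal sketch: exploration here uses a uniform random policy for brevity, whereas \textsc{UCBZero} drives exploration with an optimism bonus, and all sizes and names are illustrative.

```python
# Minimal sketch of the task-agnostic RL protocol on a tabular MDP.
# Illustrative only: UCBZero's bonus terms and guarantees are in the paper.
import numpy as np

rng = np.random.default_rng(0)
S, A, H = 6, 3, 5                           # states, actions, horizon
P = rng.dirichlet(np.ones(S), size=(S, A))  # true transition kernel (S, A, S)

# --- Phase 1: reward-free exploration (uniform random policy here;
# --- UCBZero instead explores via an optimism bonus).
counts = np.zeros((S, A, S))
for _ in range(2000):                       # exploration episodes
    s = 0
    for _ in range(H):
        a = rng.integers(A)
        s_next = rng.choice(S, p=P[s, a])
        counts[s, a, s_next] += 1
        s = s_next

N_sa = counts.sum(axis=2, keepdims=True)
P_hat = np.where(N_sa > 0, counts / np.maximum(N_sa, 1), 1.0 / S)

# --- Phase 2: given any task's (sampled) reward function, plan on the
# --- empirical model that was learned without reward supervision.
def plan(reward):                           # reward: (S, A) array
    V = np.zeros(S)
    for _ in range(H):                      # finite-horizon value iteration
        Q = reward + P_hat @ V              # (S,A) + (S,A,S)@(S,) -> (S,A)
        V = Q.max(axis=1)
    return V

task_rewards = [rng.uniform(size=(S, A)) for _ in range(3)]  # N tasks
for i, r in enumerate(task_rewards):
    print(f"task {i}: V(s0) = {plan(r)[0]:.3f}")
```

The key point the framework captures is that Phase 1 is shared across all $N$ tasks, so its cost is amortized: only the cheap planning step in Phase 2 is repeated per task.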
A Unified Approximation Framework for Compressing and Accelerating Deep Neural Networks
Deep neural networks (DNNs) have achieved significant success in a variety of
real-world applications, e.g., image classification. However, the large number
of parameters in these networks limits their efficiency, owing to the large
model size and the intensive computation involved. To address this issue,
various approximation techniques have been investigated, which seek a
lightweight network with smaller model size or faster inference in exchange
for little performance degradation. Both low-rankness and sparsity are
appealing properties for network approximation. In this paper, we propose a
unified framework to compress convolutional neural networks (CNNs) by
combining these two properties, while taking the nonlinear activation into
consideration.
Each layer in the network is approximated by the sum of a structured sparse
component and a low-rank component, which is formulated as an optimization
problem. Then, an extended version of the alternating direction method of
multipliers (ADMM) with guaranteed convergence is presented to solve the
relaxed optimization problem. Experiments are carried out on VGG-16, AlexNet
and GoogLeNet on large image classification datasets. The results surpass
previous work in terms of accuracy degradation, compression rate, and speedup
ratio. The proposed method compresses the model substantially (up to a 4.9x
reduction in parameters) with little or no loss in accuracy.
Comment: 8 pages, 5 figures, 6 tables
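To make the per-layer decomposition concrete, the sketch below approximates a single weight matrix as the sum of a sparse component and a low-rank component via plain alternating minimization (truncated SVD for the low-rank part, hard thresholding for the sparse part). This is a simplified stand-in for the paper's extended ADMM, which additionally handles structured sparsity and the nonlinear activation; the rank and sparsity levels are arbitrary choices.

```python
# Sketch: approximate a layer's weights as sparse + low-rank, W ~ S + L.
# Simplified alternating scheme for intuition; the paper instead solves a
# relaxed program with an extended ADMM and considers the activation.
import numpy as np

def sparse_plus_lowrank(W, rank=8, sparsity=0.05, iters=30):
    """Alternate: L <- best rank-r fit of (W - S) via truncated SVD;
    S <- keep the largest-magnitude entries of the residual (W - L)."""
    S = np.zeros_like(W)
    k = int(sparsity * W.size)              # nonzeros kept in S
    for _ in range(iters):
        U, sig, Vt = np.linalg.svd(W - S, full_matrices=False)
        L = (U[:, :rank] * sig[:rank]) @ Vt[:rank]
        R = W - L
        S = np.zeros_like(W)
        if k > 0:
            idx = np.argpartition(np.abs(R), -k, axis=None)[-k:]
            S.flat[idx] = R.flat[idx]       # hard-threshold the residual
    return S, L

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))             # e.g. one fully connected layer
rank, sparsity = 8, 0.05
S, L = sparse_plus_lowrank(W, rank=rank, sparsity=sparsity)
err = np.linalg.norm(W - S - L) / np.linalg.norm(W)
print(f"relative approximation error: {err:.3f}")
print(f"params kept: sparse={int((S != 0).sum())}, "
      f"low-rank={rank * sum(L.shape)} (dense: {W.size})")
```

The compression comes from storing only the nonzeros of S and the two thin factors of L, which together can be far smaller than the dense layer when the rank and sparsity budgets are tight.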